Journal of Liaoning Petrochemical University
Optimal Consensus of Heterogeneous Multi-Agent Systems Based on Q-Learning
Weiran Cheng, Jinna Li

This paper proposes a model-free control protocol design method based on off-policy reinforcement learning for solving the optimal consensus problem of heterogeneous multi-agent systems with leaders. Because the agents have different system state matrices, the dynamic expression of the local neighborhood error is complicated for heterogeneous multi-agent systems. Compared with existing approaches that design observers for the distributed control of multi-agent systems, the proposed method of solving for the global neighborhood error state expression reduces computational complexity. First, the dynamic expression of the global neighborhood error of the multi-agent system is established from augmented variables. Second, the coupled Bellman equation and HJB equation are derived from a quadratic value function; the Nash equilibrium solution of the multi-agent optimal consensus problem is then obtained by solving the coupled HJB equation, and a proof of the Nash equilibrium is given. Third, an off-policy Q-learning algorithm is proposed to learn this Nash equilibrium solution, and the algorithm is implemented using a critic neural network structure trained by gradient descent. Finally, a simulation example verifies the effectiveness of the proposed algorithm.
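
As a minimal sketch of the quadratic value function and coupled HJB equation referred to in the abstract, the standard graphical-game forms are shown below; the symbols (global neighborhood error \delta, agent inputs u_j, weighting matrices Q_i and R_{ij}, input matrix g_i) are generic assumptions rather than the paper's exact definitions.

V_i(\delta(t)) = \int_t^{\infty} \Big( \delta(\tau)^{\top} Q_i \, \delta(\tau) + \sum_{j \in \mathcal{N}_i \cup \{i\}} u_j(\tau)^{\top} R_{ij} \, u_j(\tau) \Big) \, d\tau

0 = \delta^{\top} Q_i \, \delta + \sum_{j} u_j^{\top} R_{ij} \, u_j + \big( \nabla_{\delta} V_i \big)^{\top} \dot{\delta}, \qquad u_i^{*} = -\tfrac{1}{2} \, R_{ii}^{-1} g_i^{\top} \nabla_{\delta} V_i

Solving these coupled equations simultaneously for all agents is what yields the Nash equilibrium policies mentioned in the abstract.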

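A minimal sketch of the critic-network-with-gradient-descent step for off-policy Q-learning described above is given next, assuming a quadratic critic Q_hat(delta, u) = w @ phi(delta, u) trained on the Bellman residual; the function names, feature map, data format, and toy dynamics are illustrative assumptions, not the paper's implementation.

import numpy as np

def quadratic_features(delta, u):
    # Quadratic basis phi(delta, u): upper-triangular monomials of z = [delta; u],
    # so the critic Q_hat(delta, u) = w @ phi(delta, u) is a quadratic form.
    z = np.concatenate([delta, u])
    return np.outer(z, z)[np.triu_indices(z.size)]

def critic_q_learning(transitions, gamma=0.95, lr=1e-3, epochs=200):
    # Off-policy critic training by gradient descent on the Bellman residual.
    # transitions: list of (delta, u, cost, delta_next, u_next) tuples collected
    # under a behaviour policy; u_next is the target policy's action at delta_next.
    w = np.zeros(quadratic_features(transitions[0][0], transitions[0][1]).size)
    for _ in range(epochs):
        for delta, u, cost, delta_next, u_next in transitions:
            phi = quadratic_features(delta, u)
            phi_next = quadratic_features(delta_next, u_next)
            td_error = cost + gamma * (w @ phi_next) - (w @ phi)   # Bellman residual
            w += lr * td_error * phi                               # semi-gradient step
    return w

# Toy usage: one follower with a 2-D neighborhood error and a scalar input,
# a random behaviour policy and a linear target policy (all values are made up).
rng = np.random.default_rng(0)
transitions = []
for _ in range(500):
    d = rng.standard_normal(2)
    a = rng.standard_normal(1)
    d_next = 0.9 * d + 0.1 * np.array([a[0], 0.0])
    a_next = np.array([-0.5 * d_next[0]])
    cost = float(d @ d + a @ a)
    transitions.append((d, a, cost, d_next, a_next))
critic_weights = critic_q_learning(transitions)

Because the transitions may be generated by an arbitrary behaviour policy while the target-policy action is evaluated separately at the next state, the update is off-policy, which matches the learning setting described in the abstract.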
2022, 42 (4): 59-67. DOI: 10.3969/j.issn.1672-6952.2022.04.011